Machine Learning Cheat Sheet: Classical equations, diagrams and tricks in machine learning by Zhang Wei

Author: Zhang, Wei
Language: eng
Format: epub
Publisher: UNKNOWN
Published: 2020-03-21T16:00:00+00:00


Fig. 10.1: (a) A simple DAG on 5 nodes, numbered in topological order. Node 1 is the root; nodes 4 and 5 are the leaves. (b) A simple undirected graph, with the following maximal cliques: {1,2,3}, {2,3,4}, {3,5}.

10.2 Examples

\[
p(x_h \mid x_v, \theta) = \frac{p(x_h, x_v \mid \theta)}{p(x_v \mid \theta)} = \frac{p(x_h, x_v \mid \theta)}{\sum_{x'_h} p(x'_h, x_v \mid \theta)} \tag{10.7}
\]
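To make the conditioning operation concrete, here is a minimal Python sketch of Eq. (10.7) for a tabular joint distribution over one binary hidden variable and one binary visible variable; the joint table and function name below are illustrative assumptions, not from the text:

```python
import numpy as np

# Hypothetical joint distribution p(x_h, x_v) over two binary variables,
# stored as a table: rows index x_h, columns index x_v.
joint = np.array([[0.3, 0.1],
                  [0.2, 0.4]])

def condition_on_visible(joint, xv):
    """p(x_h | x_v) = p(x_h, x_v) / sum over x_h' of p(x_h', x_v)  (Eq. 10.7)."""
    column = joint[:, xv]          # unnormalized p(x_h, x_v) for the observed x_v
    return column / column.sum()   # divide by p(x_v) to normalize

print(condition_on_visible(joint, xv=1))  # -> [0.2 0.8]
```

The normalizing constant is exactly the marginal likelihood of the visible variable, obtained by summing the observed column of the joint table.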

10.2.1 Naive Bayes classifiers

Fig. 10.2: (a) A naive Bayes classifier represented as a DGM. We assume there are D = 4 features, for simplicity. Shaded nodes are observed, unshaded nodes are hidden. (b) Tree-augmented naive Bayes classifier for D = 4 features. In general, the tree topology can change depending on the value of y.
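The factorization p(y) ∏_j p(x_j | y) underlying the naive Bayes DGM can be sketched in a few lines. The Bernoulli likelihood, Laplace smoothing, data, and function names below are illustrative assumptions, not taken from the book:

```python
import numpy as np

# Minimal Bernoulli naive Bayes sketch for D = 4 binary features.
def fit_nb(X, y, n_classes=2, alpha=1.0):
    """Estimate class priors and per-class feature probabilities
    with Laplace smoothing alpha."""
    priors = np.array([(y == c).mean() for c in range(n_classes)])
    theta = np.array([(X[y == c].sum(0) + alpha) / ((y == c).sum() + 2 * alpha)
                      for c in range(n_classes)])
    return priors, theta

def predict_nb(x, priors, theta):
    # log p(y) + sum_j log p(x_j | y), then argmax over classes
    log_post = np.log(priors) + (x * np.log(theta)
                                 + (1 - x) * np.log(1 - theta)).sum(1)
    return int(np.argmax(log_post))

X = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1]])
y = np.array([0, 0, 1, 1])
priors, theta = fit_nb(X, y)
print(predict_nb(np.array([1, 1, 0, 0]), priors, theta))  # -> 0
```

Working in log space avoids underflow when the product over features is taken; the conditional-independence assumption is what lets the likelihood factor into D separate per-feature terms.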

Sometimes only some of the hidden variables are of interest to us. So let us partition the hidden variables into query variables, xq , whose value we wish to know, and the remaining nuisance variables, xn , which we are not interested in. We can compute what we are interested in by marginalizing out the nuisance variables:

\[
p(x_q \mid x_v, \theta) = \sum_{x_n} p(x_q, x_n \mid x_v, \theta) \tag{10.8}
\]
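For tabular distributions, marginalizing out the nuisance variables in Eq. (10.8) is just a sum over their axis of the table. The posterior table below is a made-up example:

```python
import numpy as np

# Hypothetical joint posterior p(x_q, x_n | x_v, theta) over a binary query
# variable x_q (rows) and a binary nuisance variable x_n (columns).
posterior = np.array([[0.1, 0.2],
                      [0.3, 0.4]])

# Eq. 10.8: sum out the nuisance variable along its axis.
p_query = posterior.sum(axis=1)
print(p_query)  # -> [0.3 0.7]
```

The result is a properly normalized distribution over the query variable alone, since summing over all values of x_n preserves total probability.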

10.4 Learning

MAP estimate:

\[
\hat{\theta} = \arg\max_{\theta} \sum_{i=1}^{N} \log p(x_{i,v} \mid \theta) + \log p(\theta) \tag{10.9}
\]
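For the special case of a single Bernoulli parameter with a Beta(a, b) prior, the objective in Eq. (10.9) has a closed-form maximizer, (N1 + a − 1) / (N + a + b − 2). The data and hyperparameters below are illustrative, not from the text:

```python
import numpy as np

# MAP estimate of a Bernoulli parameter under a Beta(a, b) prior.
x = np.array([1, 1, 0, 1, 0, 1, 1, 0])  # observed binary data
a, b = 2.0, 2.0                          # Beta prior hyperparameters
N1, N = x.sum(), len(x)

# Closed-form maximizer of the log-likelihood plus log-prior (Eq. 10.9).
theta_map = (N1 + a - 1) / (N + a + b - 2)
print(theta_map)  # -> 0.6
```

With a = b = 1 (a uniform prior) the log-prior term is constant and the MAP estimate reduces to the maximum-likelihood estimate N1/N.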

10.4.1 Learning from complete data

10.2.2 Markov and hidden Markov models

Fig. 10.3: A first- and second-order Markov chain.
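In a first-order Markov chain, each state depends only on its immediate predecessor, so sampling a trajectory only requires a transition matrix. A minimal sketch, with an illustrative two-state transition matrix:

```python
import numpy as np

# Transition matrix for a two-state first-order Markov chain:
# row i gives p(x_t | x_{t-1} = i).
T = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def sample_chain(T, x0, length, rng):
    """Draw a trajectory of the given length starting from state x0."""
    states = [x0]
    for _ in range(length - 1):
        # Next state depends only on the current one (the Markov property).
        states.append(int(rng.choice(len(T), p=T[states[-1]])))
    return states

rng = np.random.default_rng(0)
print(sample_chain(T, x0=0, length=10, rng=rng))
```

A second-order chain would instead condition each state on the previous two, which can be handled with the same code by enlarging the state space to pairs of states.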


